16 research outputs found

    Agile-SD: A Linux-based TCP Congestion Control Algorithm for Supporting High-speed and Short-distance Networks

    Recently, high-speed and short-distance networks have been widely deployed, and their necessity is growing every day. This type of network is used in several settings, such as Local Area Networks (LANs) and Data Center Networks (DCNs), where it connects computing and storage elements in order to provide rapid services. The overall performance of such networks is significantly influenced by the Congestion Control Algorithm (CCA), which suffers from bandwidth under-utilization, especially when the applied buffer regime is very small. In this paper, a novel loss-based CCA tailored for high-speed and Short-Distance (SD) networks, namely Agile-SD, is proposed. Its main contribution is the mechanism of the agility factor. Intensive simulation experiments have been carried out to evaluate the performance of Agile-SD against Compound and Cubic, the default CCAs of the most commonly used operating systems. The results show that the proposed CCA outperforms the compared CCAs in terms of average throughput, loss ratio, and fairness, especially when a small buffer is applied. Moreover, Agile-SD shows lower sensitivity to changes in buffer size and packet error rate, which increases its efficiency.
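The abstract does not define the agility-factor mechanism; a minimal sketch of one plausible reading is below, in which a loss-based AIMD update is boosted by a factor that grows toward a cap as the window re-approaches the size at which the last loss occurred. All names and formulas here are illustrative assumptions, not the Agile-SD equations.

```python
# Hypothetical "agility factor" style cwnd update (illustrative only).
# The factor rises from 1 toward lambda_max as cwnd re-approaches the
# window observed at the last loss, accelerating recovery in
# small-buffer regimes.

class AgileLikeCCA:
    def __init__(self, lambda_max=3.0, beta=0.5):
        self.cwnd = 10.0            # congestion window (segments)
        self.loss_cwnd = 100.0      # cwnd observed at the last loss event
        self.lambda_max = lambda_max
        self.beta = beta            # multiplicative-decrease factor

    def agility_factor(self):
        # Normalize the remaining gap to the last loss point into [0, 1],
        # then map it onto [1, lambda_max]: small gap -> large factor.
        gap = max(self.loss_cwnd - self.cwnd, 0.0) / self.loss_cwnd
        return 1.0 + (self.lambda_max - 1.0) * (1.0 - gap)

    def on_ack(self):
        # AIMD-style per-ACK increase, scaled by the agility factor.
        self.cwnd += self.agility_factor() / self.cwnd

    def on_loss(self):
        self.loss_cwnd = self.cwnd
        self.cwnd = max(self.cwnd * self.beta, 2.0)
```

The point of the sketch is the shape of the mechanism: a faster-than-Reno ramp while below the last loss point, which is what lets a loss-based CCA keep a shallow buffer full.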

    Energy and Spectral Efficiency Balancing Algorithm for Energy Saving in LTE Downlinks

    In wireless network communication environments, Spectral Efficiency (SE) and Energy Efficiency (EE) are among the major indicators used to evaluate network performance. Given the high demand for data-rate services and the exponential growth of energy consumption, SE and EE continue to attract increasing attention in academia and industry, so a study of the trade-off between these metrics is imperative. In contrast with existing works, this study proposes an efficient SE and EE trade-off algorithm for saving energy in downlink Long Term Evolution (LTE) networks, concurrently optimizing SE and EE while considering battery life at the Base Station (BS). The scheme is formulated as a Multi-objective Optimization Problem (MOP) and its Pareto-optimal solution is examined. In contrast with other algorithms that prolong battery life by considering the idle state of a BS, thereby increasing average delay and energy consumption, the proposed algorithm prolongs battery life by adjusting the initial and final states of a BS to minimize average delay and energy consumption. Similarly, the use of an omni-directional antenna, which spreads radio signals to the user equipment in all directions, causes high interference and low spatial reuse. We therefore propose using a directional antenna instead, transmitting signals in one direction, which results in little or no interference and high spatial reuse. The proposed scheme has been extensively evaluated through simulation, and the results show that it decreases the average response delay, improves SE, and minimizes energy consumption.
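The SE/EE trade-off formulated as a MOP can be illustrated with a weighted-sum scalarization: sweeping the weight between the two objectives traces an approximation of the Pareto front. The bandwidth, noise, channel-gain, and circuit-power figures below are illustrative assumptions, not values from the paper.

```python
# Sketch of the SE/EE trade-off as a weighted-sum scalarization of a
# two-objective problem over a single decision variable: transmit
# power p. All numeric constants are illustrative.

import math

B = 10e6          # bandwidth (Hz)
N0 = 1e-13        # noise power (W)
P_CIRCUIT = 1.0   # static circuit power at the BS (W)
G = 1e-10         # channel gain

def spectral_efficiency(p):
    # Shannon rate per Hz: log2(1 + SNR), in bit/s/Hz
    return math.log2(1.0 + G * p / N0)

def energy_efficiency(p):
    # bits per joule: achievable rate over total consumed power
    return B * spectral_efficiency(p) / (p + P_CIRCUIT)

def scalarized(p, w):
    # Weighted sum of the (roughly normalized) objectives; sweeping
    # w in [0, 1] traces an approximate Pareto front.
    return w * spectral_efficiency(p) + (1 - w) * energy_efficiency(p) / 1e6

def best_power(w):
    grid = [0.1 * k for k in range(1, 201)]  # search 0.1 .. 20 W
    return max(grid, key=lambda p: scalarized(p, w))
```

As expected, a pure-SE weighting (`w=1`) drives the power to the top of the range, while a pure-EE weighting (`w=0`) settles on a much lower power where the circuit power is amortized but the log-rate gain of extra watts has not yet flattened.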

    Elastic-TCP: flexible congestion control algorithm to adapt for high-BDP networks

    In the last decade, the demand for Internet applications has increased, which has increased the number of data centers across the world. These data centers are usually connected to each other using long-distance, high-speed networks. The Transmission Control Protocol (TCP) is the predominant protocol used to provide connectivity among these data centers. Unfortunately, the huge Bandwidth-Delay Product (BDP) of these networks hinders TCP from achieving full bandwidth utilization. In order to increase TCP's flexibility to adapt to high-BDP networks, we propose a new delay-based and RTT-independent congestion control algorithm (CCA), namely Elastic-TCP. Its main contribution is the novel Window-correlated Weighting Function (WWF), which increases TCP bandwidth utilization over high-BDP networks. Extensive simulation and testbed experiments have been carried out to evaluate the proposed Elastic-TCP by comparing its performance to the commonly used TCPs developed by Microsoft, Linux, and Google. The results show that Elastic-TCP achieves higher average throughput than the other TCPs while maintaining sharing fairness and loss ratio. Moreover, Elastic-TCP presents lower sensitivity to variations in buffer size and packet error rate than the other TCPs, which grants high efficiency and stability.
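A short sketch of a window-correlated, RTT-weighted increase in the spirit of the WWF: the per-ACK step grows with both the current window and the ratio of the maximum observed RTT to the current RTT, so the sender accelerates while the high-BDP pipe is underfilled. Treat the exact expression as an illustrative assumption rather than the published formula.

```python
import math

# Sketch of an Elastic-TCP-style delay-based increase: the weighting
# function correlates the per-ACK step with the current window and the
# RTT headroom (rtt_max / rtt). Exact formula is an assumption.

class ElasticLikeCCA:
    def __init__(self):
        self.cwnd = 10.0   # congestion window (segments)
        self.rtt_max = 0.0

    def on_ack(self, rtt):
        self.rtt_max = max(self.rtt_max, rtt)
        # Larger window and larger RTT headroom -> larger step.
        wwf = math.sqrt((self.rtt_max / rtt) * self.cwnd)
        self.cwnd += wwf / self.cwnd

    def on_loss(self):
        self.cwnd = max(self.cwnd / 2.0, 2.0)
```

Because the step scales with the window itself, growth is super-linear in a long fat pipe yet falls back toward a Reno-like pace once queuing delay pushes the current RTT up to the maximum.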

    Review on QoS provisioning approaches for supporting video traffic in IEEE802.11e: challenges and issues

    Recently, the demand for multimedia applications has increased dramatically, which in turn increases the portion of video traffic on the Internet. Video streams, which require stringent Quality of Service (QoS), were expected to occupy more than two-thirds of web traffic by 2019. IEEE 802.11e introduced HCF Controlled Channel Access (HCCA) to provide QoS for delay-sensitive applications, including highly compressed video streams. However, IEEE 802.11e performance is hindered by the dynamic nature of Variable Bit Rate (VBR) video streams, in which packet size and inter-arrival time fluctuate rapidly during the traffic lifetime. Many approaches have been proposed in the literature to enable IEEE 802.11e to accommodate the irregularity of VBR video traffic. In this article, we highlight and discuss the QoS challenges in IEEE 802.11e, classify the existing QoS approaches, and discuss a selection of recent promising enhancements of HCCA. Eventually, a set of open research issues and potential future directions is presented.

    Crucial File Selection Strategy (CFSS) for Enhanced Download Response Time in Cloud Replication Environments

    Cloud Computing is a mass platform for serving high-volume data from multiple devices and numerous technologies. Cloud tenants demand fast access to their data without any disruptions, so cloud providers strive to ensure that all individual data are secured and always accessible. Hence, an appropriate replication strategy capable of selecting essential data is required in cloud replication environments. This paper proposes a Crucial File Selection Strategy (CFSS) to address poor response time in a cloud replication environment. A cloud simulator called CloudSim is used to conduct the necessary experiments, and results are presented to demonstrate the improvement in replication performance. The analytical graphs obtained are discussed thoroughly, and the proposed CFSS algorithm outperforms an existing algorithm with a 10.47% improvement in average response time for multiple jobs per round.
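The abstract does not give the CFSS selection criterion; a hypothetical sketch of the general pattern, ranking files by a criticality score and replicating the top-k first, is below. The scoring formula and field names are illustrative assumptions, not the CFSS definition.

```python
# Hypothetical "crucial file" selection step for a replication strategy:
# rank files by an illustrative criticality score (popular,
# expensive-to-fetch, under-replicated files score highest) and
# replicate the top-k first.

from dataclasses import dataclass

@dataclass
class FileRecord:
    name: str
    accesses: int      # recent access count (popularity proxy)
    size_mb: float     # transfer-cost proxy
    replicas: int      # existing replica count

def criticality(f: FileRecord) -> float:
    # Illustrative score: demand times cost, discounted by redundancy.
    return f.accesses * f.size_mb / max(f.replicas, 1)

def select_crucial(files, k):
    return sorted(files, key=criticality, reverse=True)[:k]
```

Any real strategy would refine both the score (e.g. adding locality or deadline terms) and the replica-placement step, which this sketch omits.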

    HEVC watermarking techniques for authentication and copyright applications: challenges and opportunities

    Recently, High-Efficiency Video Coding (HEVC/H.265) has been chosen to replace previous video coding standards, such as H.263 and H.264. Despite its efficiency, HEVC still lacks reliable and practical functionalities to support authentication and copyright applications. To provide this support, several watermarking techniques have been proposed during the last few years. However, those techniques still suffer from many issues that need to be considered in future designs. In this paper, a Systematic Literature Review (SLR) is introduced to identify HEVC watermarking challenges and potential research directions for interested researchers and developers. The time scope of this SLR covers all research articles published during the six years from January 2014 to the end of April 2020. Forty-two articles met the selection criteria out of 343 articles published in this area during that period. A new classification is drawn, followed by an identification of the challenges of implementing HEVC watermarking techniques based on the analysis and discussion of the chosen articles. Eventually, recommendations for HEVC watermarking techniques are listed to help researchers improve existing techniques or design new, efficient ones.

    Adaptive linux-based TCP congestion control algorithm for high-speed networks

    Recently, high-speed networks have been widely deployed, and their necessity is growing every day. In general, high-speed networks provide connectivity among computing elements, storage devices, and/or data centers in order to deliver fast and reliable services to end-users. High-speed networks can be classified as: (1) short-distance networks, such as local area networks and data center networks, and (2) long-distance networks, such as metropolitan and wide area networks, which occasionally employ oceanic and/or transatlantic links to connect scattered data centers located around the world. The overall performance of such networks is significantly influenced by the Transmission Control Protocol (TCP). Although TCP is the predominant transmission protocol used on the Internet, its Congestion Control Algorithm (CCA) is still unable to adapt to high-speed networks, which are not the typical environment for which most CCAs were designed. For this reason, employing TCP over high-speed networks causes extreme performance degradation, leading to poor bandwidth utilization, due to unavoidable network characteristics such as small buffers, long RTTs, and non-congestion loss. In order to reduce sensitivity to packet loss and to improve the ability of the TCP CCA to deal with small-buffer regimes, as in short-distance and low-BDP networks, this work proposes a novel loss-based TCP CCA, namely AF-based, designed for high-speed and short-distance networks. Extensive simulation experiments are carried out to evaluate the performance of the proposed AF-based CCA compared to C-TCP and Cubic-TCP, the default CCAs of the most commonly used operating systems. The results show that the AF-based CCA outperforms the compared CCAs in terms of average throughput, loss ratio, and fairness, especially when a small buffer regime is applied.
    Moreover, the AF-based CCA shows lower sensitivity to changes in buffer size and packet error rate, which increases its efficiency. Further, we propose a novel mathematical model to calculate the average throughput of the AF-based CCA. The main contributions of this model are: first, to validate the simulation results of the AF-based CCA by comparing them to the numerical results of this model and to the results of NewReno as a benchmark; second, to study the impact of the λmax parameter on throughput and epoch time; and third, to formulate an equation that automates the configuration of λmax in order to increase the scalability of the AF-based CCA. The results confirm the validity of the proposed algorithm. Furthermore, we propose a new delay-based CCA to increase bandwidth utilization over long-distance networks, in which RTTs are very long, buffers are very large, and packet loss is common. This CCA contributes the novel Window-correlated Weighting Function (WWF), which correlates the increase in cwnd to its magnitude. The gained increase is then balanced by the weighting function according to the variation of RTT in order to maintain fairness. Consequently, this behavior improves the ability of TCP to adapt to different long-distance network scenarios, and especially improves bandwidth utilization over high-BDP networks. Extensive simulation experiments show that the WWF-based CCA achieves higher performance than the other CCAs while maintaining fairness. Moreover, it shows higher efficiency and stability than the compared CCAs, especially with large buffers that introduce additional delay. Fundamentally, TCP-based applications need to deal with links of any distance without human reconfiguration. For this reason, it becomes necessary to design an adaptive CCA that can simultaneously serve any-distance networks.
    Thus, we propose a novel adaptive TCP CCA, namely Agile-TCP, which combines both the AF-based and WWF-based approaches. This combination reduces the sensitivity to packet loss, buffer size, and RTT variation, which in turn improves the overall performance of TCP over any-distance networks. Beyond that, a Linux kernel CCA module is implemented as a real product of Agile-TCP. For evaluation purposes, a real testbed with a single dumbbell topology is set up using the well-known Dummynet network emulator. The results show that Agile-TCP outperforms the compared CCAs in most scenarios, which is very promising for many applications, such as cloud computing and big data transfer.
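The idea of serving any-distance networks with one CCA can be sketched as a switch between a loss-based rule for short-RTT (low-BDP) paths and a delay-based, window-correlated rule for long-RTT (high-BDP) paths. The threshold and both update rules below are illustrative assumptions, not the Agile-TCP specification.

```python
import math

# Illustrative "any-distance" adaptive increase: pick a loss-based rule
# for short-RTT paths and a window-correlated, RTT-weighted rule for
# long-RTT paths. Threshold and rules are assumptions for illustration.

RTT_THRESHOLD = 0.05  # 50 ms: treat longer paths as high-BDP

def next_cwnd(cwnd, rtt, rtt_max, lambda_max=3.0):
    if rtt < RTT_THRESHOLD:
        # Short-distance regime: aggressive loss-based additive increase.
        return cwnd + lambda_max / cwnd
    # Long-distance regime: step scales with window and RTT headroom.
    wwf = math.sqrt((rtt_max / rtt) * cwnd)
    return cwnd + wwf / cwnd
```

The benefit of such a combination is exactly what the abstract claims: no per-path human reconfiguration, since the measured RTT selects the regime at runtime.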